Search Results

Documents authored by Röglin, Heiko


Document
Track A: Algorithms, Complexity and Games
Connected k-Center and k-Diameter Clustering

Authors: Lukas Drexler, Jan Eube, Kelin Luo, Heiko Röglin, Melanie Schmidt, and Julian Wargalla

Published in: LIPIcs, Volume 261, 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)


Abstract
Motivated by an application from geodesy, we study the connected k-center problem and the connected k-diameter problem. These problems arise from the classical k-center and k-diameter problems by adding a side constraint: we are given an undirected connectivity graph G on the input points, and a clustering is only feasible if every cluster induces a connected subgraph in G. Usually, in clustering problems one assumes that the clusters are pairwise disjoint. We study this case, but also the case in which clusters are allowed to be non-disjoint, which can help to satisfy the connectivity constraints. Our main result is an O(1)-approximation algorithm for the disjoint connected k-center and k-diameter problem for Euclidean spaces of low dimension (constant d) and for metrics with constant doubling dimension. For general metrics, we get an O(log² k)-approximation. Our algorithms work by first computing a non-disjoint connected clustering and then transforming it into a disjoint connected clustering. We complement these upper bounds by several upper and lower bounds for variations and special cases of the model.
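
As a minimal illustration of the side constraint (a sketch assuming Python with networkx; the connectivity graph and the clusterings below are hypothetical), a clustering is feasible exactly when every cluster induces a connected subgraph of G:

```python
# Sketch: checking the connectivity side constraint from the abstract.
# Assumes networkx; the graph and clusterings are made-up illustration data.
import networkx as nx

def is_feasible(G, clusters):
    """A clustering is feasible iff every cluster induces a connected subgraph of G."""
    for cluster in clusters:
        if len(cluster) > 0 and not nx.is_connected(G.subgraph(cluster)):
            return False
    return True

# Hypothetical example: a path graph on 6 points.
G = nx.path_graph(6)
print(is_feasible(G, [{0, 1, 2}, {3, 4, 5}]))  # True: both clusters induce connected subgraphs
print(is_feasible(G, [{0, 1, 5}, {2, 3, 4}]))  # False: {0, 1, 5} is not connected in G
```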

Cite as

Lukas Drexler, Jan Eube, Kelin Luo, Heiko Röglin, Melanie Schmidt, and Julian Wargalla. Connected k-Center and k-Diameter Clustering. In 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 261, pp. 50:1-50:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{drexler_et_al:LIPIcs.ICALP.2023.50,
  author =	{Drexler, Lukas and Eube, Jan and Luo, Kelin and R\"{o}glin, Heiko and Schmidt, Melanie and Wargalla, Julian},
  title =	{{Connected k-Center and k-Diameter Clustering}},
  booktitle =	{50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)},
  pages =	{50:1--50:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-278-5},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{261},
  editor =	{Etessami, Kousha and Feige, Uriel and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2023.50},
  URN =		{urn:nbn:de:0030-drops-181024},
  doi =		{10.4230/LIPIcs.ICALP.2023.50},
  annote =	{Keywords: Approximation algorithms, Clustering, Connectivity constraints}
}
Document
The Price of Hierarchical Clustering

Authors: Anna Arutyunova and Heiko Röglin

Published in: LIPIcs, Volume 244, 30th Annual European Symposium on Algorithms (ESA 2022)


Abstract
Hierarchical Clustering is a popular tool for understanding the hereditary properties of a data set. Such a clustering is actually a sequence of clusterings that starts with the trivial clustering in which every data point forms its own cluster and then successively merges two existing clusters until all points are in the same cluster. A hierarchical clustering achieves an approximation factor of α if the cost of each k-clustering in the hierarchy is at most α times the cost of an optimal k-clustering. We study as cost functions the maximum (discrete) radius of any cluster (k-center problem) and the maximum diameter of any cluster (k-diameter problem). In general, the optimal clusterings do not form a hierarchy and hence an approximation factor of 1 cannot be achieved. We call the smallest approximation factor that can be achieved for any instance the price of hierarchy. For the k-diameter problem we improve the upper bound on the price of hierarchy to 3+2√2 ≈ 5.83. Moreover, we significantly improve the lower bounds for k-center and k-diameter, proving a price of hierarchy of exactly 4 and 3+2√2, respectively.
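
The following sketch (Python; the points, the single-linkage merge rule used to build a hierarchy, and the brute-force search over partitions are chosen only for illustration on a tiny instance) computes the factor achieved by one particular hierarchy, i.e. the maximum over k of the cost of its k-clustering divided by the optimal k-clustering cost; the result exceeds 1, reflecting that optimal clusterings need not form a hierarchy.

```python
# Sketch: the factor achieved by one hierarchy under the k-diameter objective.
# All data and the merge rule below are made up; opt(k) is brute-forced.
import itertools

points = [0.0, 2.0, 3.0, 5.0]                 # four points on a line
d = lambda p, q: abs(p - q)

def diameter(c):
    return max((d(p, q) for p, q in itertools.combinations(c, 2)), default=0.0)

def cost(clustering):                          # k-diameter cost of a clustering
    return max(diameter(c) for c in clustering)

def partitions(items):                         # all partitions of a small set
    if not items:
        yield []
        return
    head, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[head] + part[i]] + part[i + 1:]
        yield [[head]] + part

def opt(k):                                    # optimal k-diameter cost by brute force
    return min(cost(p) for p in partitions(points) if len(p) == k)

# Build one hierarchy by repeatedly merging the two closest clusters (single linkage).
clustering = [[p] for p in points]
levels = {len(clustering): cost(clustering)}
while len(clustering) > 1:
    i, j = min(itertools.combinations(range(len(clustering)), 2),
               key=lambda ij: min(d(p, q) for p in clustering[ij[0]] for q in clustering[ij[1]]))
    clustering = [c for t, c in enumerate(clustering) if t not in (i, j)] + [clustering[i] + clustering[j]]
    levels[len(clustering)] = cost(clustering)

factor = max(levels[k] / opt(k) for k in levels if opt(k) > 0)
print("approximation factor achieved by this hierarchy:", round(factor, 3))   # 1.5
```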

Cite as

Anna Arutyunova and Heiko Röglin. The Price of Hierarchical Clustering. In 30th Annual European Symposium on Algorithms (ESA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 244, pp. 10:1-10:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{arutyunova_et_al:LIPIcs.ESA.2022.10,
  author =	{Arutyunova, Anna and R\"{o}glin, Heiko},
  title =	{{The Price of Hierarchical Clustering}},
  booktitle =	{30th Annual European Symposium on Algorithms (ESA 2022)},
  pages =	{10:1--10:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-247-1},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{244},
  editor =	{Chechik, Shiri and Navarro, Gonzalo and Rotenberg, Eva and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2022.10},
  URN =		{urn:nbn:de:0030-drops-169487},
  doi =		{10.4230/LIPIcs.ESA.2022.10},
  annote =	{Keywords: Hierarchical Clustering, approximation Algorithms, k-center Problem}
}
Document
Minimum-Error Triangulations for Sea Surface Reconstruction

Authors: Anna Arutyunova, Anne Driemel, Jan-Henrik Haunert, Herman Haverkort, Jürgen Kusche, Elmar Langetepe, Philip Mayer, Petra Mutzel, and Heiko Röglin

Published in: LIPIcs, Volume 224, 38th International Symposium on Computational Geometry (SoCG 2022)


Abstract
We apply state-of-the-art computational geometry methods to the problem of reconstructing a time-varying sea surface from tide gauge records. Our work builds on a recent article by Nitzke et al. (Computers & Geosciences, 157:104920, 2021) who suggested learning a triangulation D of a given set of tide gauge stations. The objective is to minimize the misfit of the piecewise linear surface induced by D to a reference surface that has been acquired with satellite altimetry. The authors restricted their search to k-order Delaunay (k-OD) triangulations and used an integer linear program to solve the resulting optimization problem. In geometric terms, the input to our problem consists of two sets of points in ℝ² with elevations: a set 𝒮 that is to be triangulated, and a set ℛ of reference points. Intuitively, we define the error of a triangulation as the average vertical distance of a point in ℛ to the triangulated surface that is obtained by interpolating elevations of 𝒮 linearly in each triangle. Our goal is to find the triangulation of 𝒮 that has minimum error with respect to ℛ. In our work, we prove that the minimum-error triangulation problem is NP-hard and cannot be approximated within any multiplicative factor in polynomial time unless P = NP. At the same time we show that the problem instances that occur in our application (considering sea level data from several hundreds of tide gauge stations worldwide) can be solved relatively fast using dynamic programming when restricted to k-OD triangulations for k ≤ 7. In particular, instances for which the number of connected components of the so-called k-OD fixed-edge graph is small can be solved within a few seconds.
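
The error measure defined above can be made concrete with a small sketch (Python with NumPy; all coordinates, elevations, and the triangulation below are made up): for each reference point, locate its triangle, interpolate the elevations of 𝒮 linearly via barycentric coordinates, and average the vertical distances.

```python
# Sketch of the error measure: average vertical distance of the reference points to the
# surface obtained by linear interpolation inside each triangle of a given triangulation.
# All coordinates are made up; reference points outside every triangle are ignored here.
import numpy as np

S = np.array([[0, 0], [4, 0], [0, 4], [4, 4]], dtype=float)   # points to be triangulated
z_S = np.array([0.0, 1.0, 2.0, 3.0])                          # elevations of S
triangles = [(0, 1, 2), (1, 3, 2)]                            # one fixed triangulation of S
R = np.array([[1, 1], [3, 2]], dtype=float)                   # reference points
z_R = np.array([0.9, 2.4])                                    # reference elevations

def surface_elevation(p):
    """Elevation of the triangulated surface at p, or None if p lies in no triangle."""
    for a, b, c in triangles:
        A, B, C = S[a], S[b], S[c]
        # Barycentric coordinates of p with respect to triangle ABC.
        lam1, lam2 = np.linalg.solve(np.column_stack((B - A, C - A)), p - A)
        lam0 = 1.0 - lam1 - lam2
        if min(lam0, lam1, lam2) >= -1e-12:                   # p inside or on the boundary
            return lam0 * z_S[a] + lam1 * z_S[b] + lam2 * z_S[c]
    return None

values = [(surface_elevation(p), z) for p, z in zip(R, z_R)]
errors = [abs(v - z) for v, z in values if v is not None]
print("error of the triangulation:", sum(errors) / len(errors))
```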

Cite as

Anna Arutyunova, Anne Driemel, Jan-Henrik Haunert, Herman Haverkort, Jürgen Kusche, Elmar Langetepe, Philip Mayer, Petra Mutzel, and Heiko Röglin. Minimum-Error Triangulations for Sea Surface Reconstruction. In 38th International Symposium on Computational Geometry (SoCG 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 224, pp. 7:1-7:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{arutyunova_et_al:LIPIcs.SoCG.2022.7,
  author =	{Arutyunova, Anna and Driemel, Anne and Haunert, Jan-Henrik and Haverkort, Herman and Kusche, J\"{u}rgen and Langetepe, Elmar and Mayer, Philip and Mutzel, Petra and R\"{o}glin, Heiko},
  title =	{{Minimum-Error Triangulations for Sea Surface Reconstruction}},
  booktitle =	{38th International Symposium on Computational Geometry (SoCG 2022)},
  pages =	{7:1--7:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-227-3},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{224},
  editor =	{Goaoc, Xavier and Kerber, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2022.7},
  URN =		{urn:nbn:de:0030-drops-160155},
  doi =		{10.4230/LIPIcs.SoCG.2022.7},
  annote =	{Keywords: Minimum-Error Triangulation, k-Order Delaunay Triangulations, Data dependent Triangulations, Sea Surface Reconstruction, fixed-Edge Graph}
}
Document
APPROX
Upper and Lower Bounds for Complete Linkage in General Metric Spaces

Authors: Anna Arutyunova, Anna Großwendt, Heiko Röglin, Melanie Schmidt, and Julian Wargalla

Published in: LIPIcs, Volume 207, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)


Abstract
In a hierarchical clustering problem the task is to compute a series of mutually compatible clusterings of a finite metric space (P,dist). Starting with the clustering where every point forms its own cluster, one iteratively merges two clusters until only one cluster remains. Complete linkage is a well-known and popular algorithm to compute such clusterings: in every step it merges the two clusters whose union has the smallest radius (or diameter) among all currently possible merges. We prove that the radius (or diameter) of every k-clustering computed by complete linkage is at most a factor of O(k) (or O(k²)) worse than an optimal k-clustering minimizing the radius (or diameter). Furthermore, we give a negative answer to a question posed by Dasgupta and Long [Sanjoy Dasgupta and Philip M. Long, 2005], who show a lower bound of Ω(log(k)) and ask if the approximation guarantee is in fact Θ(log(k)). We present instances where complete linkage performs poorly in the sense that the k-clustering computed by complete linkage is off by a factor of Ω(k) from an optimal solution for radius and diameter. We conclude that in general metric spaces complete linkage does not perform asymptotically better than single linkage, which merges the two clusters with the smallest inter-cluster distance and for which we prove an approximation guarantee of O(k).
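
A minimal sketch of the complete linkage rule described above with the diameter objective (Python; the point set and metric are made up): in every step, the two clusters whose union has the smallest diameter are merged.

```python
# Sketch of complete linkage with the diameter objective in a (finite) metric space.
# The points below lie in the plane purely for illustration; any metric could be used.
import itertools, math

points = [(0, 0), (1, 0), (0, 1), (4, 4), (5, 4), (9, 0)]
dist = lambda p, q: math.dist(p, q)          # plug in any metric here

def diameter(cluster):
    return max((dist(p, q) for p, q in itertools.combinations(cluster, 2)), default=0.0)

clusters = [[p] for p in points]
while len(clusters) > 1:
    # Merge the pair of clusters whose union has the smallest diameter.
    i, j = min(itertools.combinations(range(len(clusters)), 2),
               key=lambda ij: diameter(clusters[ij[0]] + clusters[ij[1]]))
    clusters = [c for t, c in enumerate(clusters) if t not in (i, j)] + [clusters[i] + clusters[j]]
    k = len(clusters)
    print(f"{k}-clustering, max diameter = {max(diameter(c) for c in clusters):.3f}")
```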

Cite as

Anna Arutyunova, Anna Großwendt, Heiko Röglin, Melanie Schmidt, and Julian Wargalla. Upper and Lower Bounds for Complete Linkage in General Metric Spaces. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 207, pp. 18:1-18:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{arutyunova_et_al:LIPIcs.APPROX/RANDOM.2021.18,
  author =	{Arutyunova, Anna and Gro{\ss}wendt, Anna and R\"{o}glin, Heiko and Schmidt, Melanie and Wargalla, Julian},
  title =	{{Upper and Lower Bounds for Complete Linkage in General Metric Spaces}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)},
  pages =	{18:1--18:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-207-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{207},
  editor =	{Wootters, Mary and Sanit\`{a}, Laura},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2021.18},
  URN =		{urn:nbn:de:0030-drops-147115},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2021.18},
  annote =	{Keywords: Hierarchical Clustering, Complete Linkage, agglomerative Clustering, k-Center}
}
Document
Bicriteria Aggregation of Polygons via Graph Cuts

Authors: Peter Rottmann, Anne Driemel, Herman Haverkort, Heiko Röglin, and Jan-Henrik Haunert

Published in: LIPIcs, Volume 208, 11th International Conference on Geographic Information Science (GIScience 2021) - Part II


Abstract
We present a new method for the task of detecting groups of polygons in a given geographic data set and computing a representative polygon for each group. This task is relevant in map generalization, where the aim is to derive a less detailed map from a given map. Following a classical approach, we define the output polygons by merging the input polygons with a set of triangles that we select from a constrained Delaunay triangulation of the input polygons' exterior. The innovation of our method is to compute the selection of triangles by solving a bicriteria optimization problem. On the one hand we aim at minimizing the total area of the output polygons; on the other hand we aim at minimizing their total perimeter. We combine these two objectives in a weighted sum and study two computational problems that naturally arise. In the first problem, the parameter that balances the two objectives is fixed and the aim is to compute a single optimal solution. In the second problem, the aim is to compute a set containing an optimal solution for every possible value of the parameter. We present efficient algorithms for these problems based on computing a minimum cut in an appropriately defined graph. Moreover, we show how the result set of the second problem can be approximated with few solutions. In an experimental evaluation, we finally show that the method is able to derive settlement areas from building footprints that are similar to reference solutions.
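
As a rough illustration of the weighted-sum combination of the two objectives (a sketch in Python; the candidate solutions and the particular weighting λ·area + (1−λ)·perimeter are invented and need not match the paper's exact formulation), the first problem fixes the parameter, while the second asks for every candidate that is optimal for some parameter value:

```python
# Sketch: combining total area and total perimeter in a weighted sum.
# Candidate aggregation results (area, perimeter) are hypothetical.
candidates = {
    "A": (10.0, 40.0),
    "B": (14.0, 28.0),
    "C": (22.0, 20.0),
}

def best(lam):
    score = lambda ap: lam * ap[0] + (1.0 - lam) * ap[1]
    return min(candidates, key=lambda name: score(candidates[name]))

# Problem 1: a single, fixed parameter value.
print(best(0.5))

# Problem 2 (approximated on a grid): which candidates are optimal for *some* parameter?
optimal_somewhere = {best(i / 100) for i in range(101)}
print(sorted(optimal_somewhere))
```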

Cite as

Peter Rottmann, Anne Driemel, Herman Haverkort, Heiko Röglin, and Jan-Henrik Haunert. Bicriteria Aggregation of Polygons via Graph Cuts. In 11th International Conference on Geographic Information Science (GIScience 2021) - Part II. Leibniz International Proceedings in Informatics (LIPIcs), Volume 208, pp. 6:1-6:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{rottmann_et_al:LIPIcs.GIScience.2021.II.6,
  author =	{Rottmann, Peter and Driemel, Anne and Haverkort, Herman and R\"{o}glin, Heiko and Haunert, Jan-Henrik},
  title =	{{Bicriteria Aggregation of Polygons via Graph Cuts}},
  booktitle =	{11th International Conference on Geographic Information Science (GIScience 2021) - Part II},
  pages =	{6:1--6:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-208-2},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{208},
  editor =	{Janowicz, Krzysztof and Verstegen, Judith A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.GIScience.2021.II.6},
  URN =		{urn:nbn:de:0030-drops-147658},
  doi =		{10.4230/LIPIcs.GIScience.2021.II.6},
  annote =	{Keywords: map generalization, aggregation, graph cuts, bicriteria optimization}
}
Document
Noisy, Greedy and Not so Greedy k-Means++

Authors: Anup Bhattacharya, Jan Eube, Heiko Röglin, and Melanie Schmidt

Published in: LIPIcs, Volume 173, 28th Annual European Symposium on Algorithms (ESA 2020)


Abstract
The k-means++ algorithm due to Arthur and Vassilvitskii [David Arthur and Sergei Vassilvitskii, 2007] has become the most popular seeding method for Lloyd’s algorithm. It samples the first center uniformly at random from the data set and the other k-1 centers iteratively according to D²-sampling, i.e., the probability that a data point becomes the next center is proportional to its squared distance to the closest center chosen so far. k-means++ is known to achieve an approximation factor of 𝒪(log k) in expectation. Already in the original paper on k-means++, Arthur and Vassilvitskii suggested a variation called the greedy k-means++ algorithm, in which in each iteration multiple possible centers are sampled according to D²-sampling and only the one that decreases the objective the most is chosen as the center for that iteration. They stated it as an open question whether this also leads to an 𝒪(log k)-approximation (or even better). We show that this is not the case by presenting a family of instances on which greedy k-means++ yields only an Ω(𝓁⋅log k)-approximation in expectation, where 𝓁 is the number of possible centers that are sampled in each iteration. Inspired by these negative results, we study a variation of greedy k-means++ which we call the noisy k-means++ algorithm. In this variation only one center is sampled in every iteration, but not exactly by D²-sampling. Instead, in each iteration an adversary is allowed to change the probabilities arising from D²-sampling individually for each point by a factor between 1-ε₁ and 1+ε₂ for parameters ε₁ ∈ [0,1) and ε₂ ≥ 0. We prove that noisy k-means++ computes an 𝒪(log² k)-approximation in expectation. We use the analysis of noisy k-means++ to design a moderately greedy k-means++ algorithm.
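
A minimal sketch of D²-sampling and the greedy variant discussed above (Python with NumPy; the data set, seed, and parameter values are made up): in each iteration, ℓ candidate centers are sampled according to D²-sampling and the one that decreases the k-means objective the most is kept. Setting ℓ = 1 recovers plain k-means++.

```python
# Sketch of greedy k-means++ seeding with D^2-sampling (illustrative data and parameters).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                      # hypothetical data set

def cost(X, centers):
    """k-means objective: sum of squared distances to the nearest center."""
    d2 = ((X[:, None, :] - np.asarray(centers)[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).sum()

def greedy_kmeanspp(X, k, ell):
    centers = [X[rng.integers(len(X))]]            # first center: uniform at random
    for _ in range(k - 1):
        d2 = ((X[:, None, :] - np.asarray(centers)[None, :, :]) ** 2).sum(axis=2).min(axis=1)
        probs = d2 / d2.sum()                      # D^2-sampling distribution
        candidates = X[rng.choice(len(X), size=ell, p=probs)]
        best = min(candidates, key=lambda c: cost(X, centers + [c]))
        centers.append(best)                       # keep the candidate with the best objective
    return np.array(centers)

centers = greedy_kmeanspp(X, k=5, ell=3)
print("seeding objective:", round(cost(X, centers), 3))
```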

Cite as

Anup Bhattacharya, Jan Eube, Heiko Röglin, and Melanie Schmidt. Noisy, Greedy and Not so Greedy k-Means++. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 18:1-18:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bhattacharya_et_al:LIPIcs.ESA.2020.18,
  author =	{Bhattacharya, Anup and Eube, Jan and R\"{o}glin, Heiko and Schmidt, Melanie},
  title =	{{Noisy, Greedy and Not so Greedy k-Means++}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{18:1--18:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.18},
  URN =		{urn:nbn:de:0030-drops-128848},
  doi =		{10.4230/LIPIcs.ESA.2020.18},
  annote =	{Keywords: k-means++, greedy, adaptive sampling}
}
Document
Probabilistic Methods in the Design and Analysis of Algorithms (Dagstuhl Seminar 17141)

Authors: Bodo Manthey, Claire Mathieu, Heiko Röglin, and Eli Upfal

Published in: Dagstuhl Reports, Volume 7, Issue 4 (2018)


Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 17141 "Probabilistic Methods in the Design and Analysis of Algorithms". Probabilistic methods play a central role in theoretical computer science. They are a powerful and widely applied tool used, for example, for designing efficient randomized algorithms and for establishing various lower bounds in complexity theory. They also form the basis of frameworks like average-case and smoothed analysis, in which algorithms are analyzed beyond the classical worst-case perspective. The seminar was on probabilistic methods with a focus on the design and analysis of algorithms. The seminar helped to consolidate the research and to foster collaborations among the researchers who use probabilistic methods in different areas of the design and analysis of algorithms.

Cite as

Bodo Manthey, Claire Mathieu, Heiko Röglin, and Eli Upfal. Probabilistic Methods in the Design and Analysis of Algorithms (Dagstuhl Seminar 17141). In Dagstuhl Reports, Volume 7, Issue 4, pp. 1-22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@Article{manthey_et_al:DagRep.7.4.1,
  author =	{Manthey, Bodo and Mathieu, Claire and R\"{o}glin, Heiko and Upfal, Eli},
  title =	{{Probabilistic Methods in the Design and Analysis of Algorithms (Dagstuhl Seminar 17141)}},
  pages =	{1--22},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2017},
  volume =	{7},
  number =	{4},
  editor =	{Manthey, Bodo and Mathieu, Claire and R\"{o}glin, Heiko and Upfal, Eli},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagRep.7.4.1},
  URN =		{urn:nbn:de:0030-drops-75452},
  doi =		{10.4230/DagRep.7.4.1},
  annote =	{Keywords: analysis of algorithms, average-case analysis, random graphs, randomized algorithms, smoothed analysis, sub-linear algorithms}
}
Document
The Alternating Stock Size Problem and the Gasoline Puzzle

Authors: Alantha Newman, Heiko Röglin, and Johanna Seif

Published in: LIPIcs, Volume 57, 24th Annual European Symposium on Algorithms (ESA 2016)


Abstract
Given a set S of integers whose sum is zero, consider the problem of finding a permutation of these integers such that: (i) all prefix sums of the ordering are non-negative, and (ii) the maximum prefix sum is minimized. Kellerer et al. referred to this problem as the stock size problem and showed that it can be approximated to within 3/2. They also showed that an approximation ratio of 2 can be achieved via several simple algorithms. We consider a related problem, which we call the alternating stock size problem, where the numbers of positive and negative integers in the input set S are equal. The problem is the same as above, but we are additionally required to alternate the positive and negative numbers in the output ordering. This problem also has several simple 2-approximations. We show that it can be approximated to within 1.79. Then we show that this problem is closely related to an optimization version of the gasoline puzzle due to Lovász, in which we want to minimize the size of the gas tank necessary to go around the track. We present a 2-approximation for this problem, using a natural linear programming relaxation whose feasible solutions are doubly stochastic matrices. Our novel rounding algorithm is based on a transformation that yields another doubly stochastic matrix with special properties, from which we can extract a suitable permutation.
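
A minimal sketch of the objective described above (Python; the input set and the brute-force enumeration are for illustration on tiny instances only): an ordering is feasible if all prefix sums are non-negative and, in the alternating variant, if positive and negative numbers alternate; its value is the maximum prefix sum.

```python
# Sketch: evaluating orderings for the (alternating) stock size objective.
# The input set is made up; brute force is only sensible for very small instances.
from itertools import accumulate, permutations

def value(ordering, alternating=False):
    """Maximum prefix sum of the ordering, or None if it is infeasible."""
    if alternating and any((a > 0) == (b > 0) for a, b in zip(ordering, ordering[1:])):
        return None                               # two consecutive numbers with the same sign
    prefix_sums = list(accumulate(ordering))
    if any(s < 0 for s in prefix_sums):
        return None                               # some prefix sum is negative
    return max(prefix_sums)

S = [3, -1, 4, -2, 1, -5]                         # sums to zero, equally many +/- numbers
best = min(v for p in permutations(S) if (v := value(p, alternating=True)) is not None)
print("optimal alternating stock size:", best)    # 5 for this instance
```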

Cite as

Alantha Newman, Heiko Röglin, and Johanna Seif. The Alternating Stock Size Problem and the Gasoline Puzzle. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 71:1-71:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{newman_et_al:LIPIcs.ESA.2016.71,
  author =	{Newman, Alantha and R\"{o}glin, Heiko and Seif, Johanna},
  title =	{{The Alternating Stock Size Problem and the Gasoline Puzzle}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{71:1--71:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.71},
  URN =		{urn:nbn:de:0030-drops-64134},
  doi =		{10.4230/LIPIcs.ESA.2016.71},
  annote =	{Keywords: approximation algorithms, stock size problem, scheduling with non-renewable resources}
}
Document
Solving Totally Unimodular LPs with the Shadow Vertex Algorithm

Authors: Tobias Brunsch, Anna Großwendt, and Heiko Röglin

Published in: LIPIcs, Volume 30, 32nd International Symposium on Theoretical Aspects of Computer Science (STACS 2015)


Abstract
We show that the shadow vertex simplex algorithm can be used to solve linear programs in strongly polynomial time with respect to the number n of variables, the number m of constraints, and 1/δ, where δ is a parameter that measures the flatness of the vertices of the polyhedron. This extends our recent result that the shadow vertex algorithm finds paths of polynomial length (w.r.t. n, m, and 1/δ) between two given vertices of a polyhedron [4]. Our result also complements a recent result due to Eisenbrand and Vempala [6] who have shown that a certain version of the random edge pivot rule solves linear programs with a running time that is strongly polynomial in the number of variables n and 1/δ, but independent of the number m of constraints. Even though the running time of our algorithm depends on m, it is significantly faster for the important special case of totally unimodular linear programs, for which 1/δ ≤ n and which have only O(n²) constraints.

Cite as

Tobias Brunsch, Anna Großwendt, and Heiko Röglin. Solving Totally Unimodular LPs with the Shadow Vertex Algorithm. In 32nd International Symposium on Theoretical Aspects of Computer Science (STACS 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 30, pp. 171-183, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{brunsch_et_al:LIPIcs.STACS.2015.171,
  author =	{Brunsch, Tobias and Gro{\ss}wendt, Anna and R\"{o}glin, Heiko},
  title =	{{Solving Totally Unimodular LPs with the Shadow Vertex Algorithm}},
  booktitle =	{32nd International Symposium on Theoretical Aspects of Computer Science (STACS 2015)},
  pages =	{171--183},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-78-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{30},
  editor =	{Mayr, Ernst W. and Ollinger, Nicolas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2015.171},
  URN =		{urn:nbn:de:0030-drops-49125},
  doi =		{10.4230/LIPIcs.STACS.2015.171},
  annote =	{Keywords: linear optimization, simplex algorithm, shadow vertex method}
}
Document
Analysis of Algorithms Beyond the Worst Case (Dagstuhl Seminar 14372)

Authors: Maria-Florina Balcan, Bodo Manthey, Heiko Röglin, and Tim Roughgarden

Published in: Dagstuhl Reports, Volume 4, Issue 9 (2015)


Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 14372 "Analysis of Algorithms Beyond the Worst Case". The theory of algorithms has traditionally focused on worst-case analysis. This focus has led to both a deep theory and many beautiful and useful algorithms. However, there are a number of important problems and algorithms for which worst-case analysis does not provide useful or empirically accurate results. This is due to the fact that worst-case inputs are often rather contrived and hardly ever occur in practical applications. Only in recent years has a paradigm shift towards a more realistic and robust algorithmic theory been initiated. The development of a more realistic theory hinges on finding models that measure the performance of an algorithm not only by its worst-case behavior but rather by its behavior on "typical" inputs. In this seminar, we discussed various recent theoretical models and results that go beyond worst-case analysis. The seminar helped to consolidate the research and to foster collaborations among the researchers working in the different branches of analysis of algorithms beyond the worst case.

Cite as

Maria-Florina Balcan, Bodo Manthey, Heiko Röglin, and Tim Roughgarden. Analysis of Algorithms Beyond the Worst Case (Dagstuhl Seminar 14372). In Dagstuhl Reports, Volume 4, Issue 9, pp. 30-49, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@Article{balcan_et_al:DagRep.4.9.30,
  author =	{Balcan, Maria-Florina and Manthey, Bodo and R\"{o}glin, Heiko and Roughgarden, Tim},
  title =	{{Analysis of Algorithms Beyond the Worst Case (Dagstuhl Seminar 14372)}},
  pages =	{30--49},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2015},
  volume =	{4},
  number =	{9},
  editor =	{Balcan, Maria-Florina and Manthey, Bodo and R\"{o}glin, Heiko and Roughgarden, Tim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagRep.4.9.30},
  URN =		{urn:nbn:de:0030-drops-48829},
  doi =		{10.4230/DagRep.4.9.30},
  annote =	{Keywords: analysis of algorithms, probabilistic analysis, smoothed analysis, approximation stability, machine learning}
}
Document
Economical Caching

Authors: Matthias Englert, Heiko Röglin, Jacob Spönemann, and Berthold Vöcking

Published in: LIPIcs, Volume 3, 26th International Symposium on Theoretical Aspects of Computer Science (2009)


Abstract
We study the management of buffers and storages in environments with unpredictably varying prices in a competitive analysis. In the economical caching problem, there is a storage with a certain capacity. For each time step, an online algorithm is given a price from the interval [1,α], a consumption, and possibly a buying limit. The online algorithm has to decide the amount to purchase from some commodity, knowing the parameter α but without knowing how the price evolves in the future. The algorithm can purchase at most the buying limit. If it purchases more than the current consumption, then the excess is stored in the storage; otherwise, the gap between consumption and purchase must be taken from the storage. The goal is to minimize the total cost. Interesting applications are, for example, stream caching on mobile devices with different classes of service, battery management in micro hybrid cars, and the efficient purchase of resources. First we consider the simple but natural class of algorithms that can informally be described as memoryless. We show that these algorithms cannot achieve a competitive ratio below √α. Then we present a more sophisticated deterministic algorithm achieving a competitive ratio of 1/(W((1-α)/(eα))+1) ∈ [√α/√2, (√α+1)/√2], where W denotes the Lambert W function. We prove that this algorithm is optimal and that not even randomized online algorithms can achieve a better competitive ratio. On the other hand, we show how to achieve a constant competitive ratio if the storage capacity of the online algorithm exceeds the storage capacity of an optimal offline algorithm by a factor of log α.
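
The stated competitive ratio can be evaluated numerically with a short sketch (assuming Python with SciPy; the value of α is arbitrary), which also checks that it lies in the stated interval [√α/√2, (√α+1)/√2].

```python
# Sketch: evaluating the competitive ratio 1/(W((1-alpha)/(e*alpha)) + 1) numerically
# and checking the stated bounds. Assumes scipy; alpha is an arbitrary example value.
import numpy as np
from scipy.special import lambertw

alpha = 16.0
ratio = 1.0 / (lambertw((1 - alpha) / (np.e * alpha)).real + 1.0)   # principal branch of W
lower, upper = np.sqrt(alpha) / np.sqrt(2), (np.sqrt(alpha) + 1) / np.sqrt(2)
print(f"competitive ratio for alpha={alpha}: {ratio:.4f} (bounds: {lower:.4f}, {upper:.4f})")
```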

Cite as

Matthias Englert, Heiko Röglin, Jacob Spönemann, and Berthold Vöcking. Economical Caching. In 26th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 3, pp. 385-396, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)


BibTeX

@InProceedings{englert_et_al:LIPIcs.STACS.2009.1826,
  author =	{Englert, Matthias and R\"{o}glin, Heiko and Sp\"{o}nemann, Jacob and V\"{o}cking, Berthold},
  title =	{{Economical Caching}},
  booktitle =	{26th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{385--396},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-09-5},
  ISSN =	{1868-8969},
  year =	{2009},
  volume =	{3},
  editor =	{Albers, Susanne and Marion, Jean-Yves},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2009.1826},
  URN =		{urn:nbn:de:0030-drops-18263},
  doi =		{10.4230/LIPIcs.STACS.2009.1826},
  annote =	{Keywords: Online algorithms, Competitive analysis, Storage management}
}